Google details new 24-hour process to sideload unverified Android apps

Google is planning big changes for Android in 2026 aimed at combating malware across the entire device ecosystem. Starting in September, Google will begin restricting application sideloading with its developer verification program, but not everyone is on board. Android Ecosystem President Sameer Samat tells Ars that the company has been listening to feedback, and the result is the newly unveiled advanced flow, which will allow power users to skip app verification.

With its new limits on sideloading, Android phones will only install apps that come from verified developers. To verify, devs releasing apps outside of Google Play will have to provide identification, upload a copy of their signing keys, and pay a $25 fee. It all seems rather onerous for people who just want to make apps without Google's intervention.

Apps that come from unverified developers won't be installable on Android phones—unless you use the new advanced flow, which will be buried in the developer settings.

When sideloading apps today, Android phones alert the user to the "unknown sources" toggle in the settings, and there's a flow to help you turn it on. The verification bypass is different and will not be revealed to users. You have to know where this is and proactively turn it on yourself, and it's not a quick process. Here are the steps:

  • Enable developer options by tapping the software build number in About Phone seven times
  • In Settings > System, open Developer Options and scroll down to "Allow Unverified Packages"
  • Flip the toggle and tap to confirm you are not being coerced
  • Enter your device unlock PIN or password
  • Restart your device
  • Wait 24 hours
  • Return to the unverified packages menu at the end of the security delay
  • Scroll past additional warnings and select either "Allow temporarily" (seven days) or "Allow indefinitely"
  • Check the box confirming you understand the risks
  • You can now install unverified packages on the device by tapping the "Install anyway" option in the package manager

The actual legwork to activate this feature only takes a few seconds, but the 24-hour countdown makes it something you cannot do spur of the moment. But why 24 hours? According to Samat, this is designed to combat the rising use of high-pressure social engineering attacks, in which the scammer convinces the victim they have to install an app immediately to avoid severe consequences.

You'll have to wait 24 hours to bypass verification. Credit: Google

"In that 24-hour period, we think it becomes much harder for attackers to persist their attack," said Samat. "In that time, you can probably find out that your loved one isn't really being held in jail or that your bank account isn't really under attack."

But people who are sure they don't want Google's verification system getting in the way of sideloading any old APK they come across don't have to wait until they encounter an unverified app to get started. You only have to select the "indefinitely" option once on a phone, and you can turn dev options off again afterward.

Choice vs. security

According to Samat, Google feels a responsibility to Android users worldwide, and things are different than they used to be, with more than 3 billion active devices out there.

"For a lot of people in the world, their phone is their only computer, and it stores some of their most private information," Samat said. "Over the years, we've evolved the platform to keep it open while also keeping it safe. And I want to emphasize, if the platform isn't safe, people aren't going to use it, and that's a lose-lose situation for everyone, including developers."

But what does that safety look like? Google swears it's not interested in the content of apps, and it won't be checking proactively when developers register. This is only about identity verification—you should know when you're installing an app that it's not an imposter and does not come from known purveyors of malware. If a verified developer distributes malware, they're unlikely to remain verified. And what is malware? For Samat, malware in the context of developer verification is an application package that "causes harm to the user's device or personal data that the user did not intend."

So a rootkit can be malware, but a rootkit you downloaded intentionally because you want root access on your phone is not malware, from Samat's perspective. Likewise, an alternative YouTube client that bypasses Google's ads and feature limits isn't causing the kind of harm that would lead to issues with verification. But these are just broad strokes; Google has not commented on any specific apps.

Google says sideloading isn't going away, but it is changing. Credit: Google

Google is proceeding cautiously with the verification rollout, and some details are still spotty. Privacy advocates have expressed concern that verification will create a database that puts independent developers at risk of legal action. Samat says that Google does push back on judicial orders for user data when they are improper. The company further suggests it's not intending to create a permanent list of developer identities that would be vulnerable to legal demands. We've asked for more detail on what data Google retains from the verification process and for what length of time.

There is also concern that developers living in sanctioned nations might be unable to verify due to the required fee. Google notes that the verification process may vary across countries and was not created specifically to bar developers in places like Cuba or Iran. We've asked for details on how Google will handle these edge cases and will update if we learn more.

Rolling out in 2026 and beyond

Android users in most of the world don't have to worry about developer verification yet, but that day is coming. In September, verification enforcement will begin in Brazil, Singapore, Indonesia, and Thailand. Impersonation and guided scams are more common in these regions, so Google is starting there before expanding verification globally next year. Google has stressed that the advanced flow will be available before the initial rollout in September.

Google stands by its assertion that users are 50 times more likely to get malware outside Google Play than in it. A big part of the gap, Samat says, is Google's decision in 2023 to begin verifying developer identities in the Play Store. This provided a framework for universal developer verification. While there are certainly reasons Google might like the control verification gives it, the Android team has felt real pressure from regulators in areas with malware issues to address platform security.

"In a lot of countries, there is chatter about if this isn't safer, then there may need to be regulatory action to lock down more of this stuff," Samat told Ars Technica. "I don't think that it's well understood that this is a real security concern in a number of countries."

Google has already started delivering the verifier to devices around the world—it's integrated with Android 16.1, which launched late in 2025. Eventually, the verifier and advanced flow will be on all currently supported Android devices. However, the UI will be consistent, with Google providing all the components and scare screens. So what you see here should be similar to what appears on your phone in a few months, regardless of who made it.


Trump staffs science and technology panel with non-scientists

PCAST, the President’s Council of Advisors on Science and Technology, is generally not a high-profile group. It tends to be noticed when things go wrong, such as when the PCAST head named by Biden had to resign due to abusive behavior. Biden, who was generally supportive of science, didn't even name the members of PCAST until eight months after his inauguration. So it's no surprise that an administration that's been hostile to science took even longer to staff its version of the group.

The list of appointees was finally released on Wednesday, and it's notable for its almost complete absence of scientists. There are still nine unfilled vacancies on the council, so it's possible more scientists will be named later. But for now, PCAST is heavily tilted toward extremely wealthy technology figures.

These include investor Marc Andreessen, Google's Sergey Brin, Michael Dell of Dell, Larry Ellison of Oracle, Jensen Huang of NVIDIA, Lisa Su of AMD, and Mark Zuckerberg of Meta. Many of the lesser-known names have similar backgrounds. The previously named chairs of PCAST are investor David Sacks and Michael Kratsios, a former investment company CFO who now heads the Office of Science and Technology Policy. Of the new appointees, Safra Catz also comes from Oracle, Fred Ehrsam co-founded Coinbase, and David Friedberg is another investor.

A few of the new members actually have some background in academic research. Both Jacob DeWitte and Bob Mumgaard got PhDs from MIT before founding nuclear companies: DeWitte is the CEO of the small modular nuclear startup Oklo, and Mumgaard is the CEO of Commonwealth Fusion Systems. Su has a PhD as well, although she's been in executive positions for many years. John Martinis is a Nobel Prize winner for his work on quantum physics; he played a critical role in the development of Google's quantum computing efforts and has since been involved in two additional quantum computing startups.

This is not the council you'd name if you were at all interested in the role of fundamental research in enabling technology development. It's more appropriate if your focus is on investing in well-proven commercial technologies. In keeping with that, the announcement says, "Under President Trump, PCAST will focus on topics related to the opportunities and challenges that emerging technologies present to the American workforce, and ensuring all Americans thrive in the Golden Age of Innovation."

While PCAST isn't a high-profile group, it can play a useful role in analyzing emerging science and technology that doesn't neatly fall within the remit of any single agency. You can get a sense of that by looking at the reports it prepared during the Obama administration, which addressed fundamental issues like antibiotic resistance and applied work like advanced manufacturing.

While this council appears to be poorly prepared to understand the needs and function of fundamental academic research, it's pretty clear that none of that is a priority for this administration, and naming academics to this group is unlikely to change that trajectory. So while there's still a chance that researchers could be named in the future, there may not be any useful role for them.


What Made Bell Labs So Successful?

Bell Labs "created many of the foundational innovations of the modern age," writes Jon Gertner, author of The Idea Factory: Bell Labs and the Great Age of American Innovation — from transistors and telecommunications satellites to Unix and the C programming language. But what was the secret to its success? he asks in a new article for the Wall Street Journal.

Start with its lucky arrival in a "problem-rich" environment, suggests Arno Penzias, winner of one of Bell Labs' 11 Nobel Prizes:

It was Bell Labs' responsibility, in other words, to create technologies for designing, expanding and improving an unruly communications network of cables and microwave links and glass fibers. The Labs also had to figure out ways to create underwater conduits, as well as switching centers that could manage the growing number of customers and escalating amounts of data....

Money mattered, too. Being connected to AT&T, the largest company in the world, was an advantage. The Labs' budget was enormous, and accounting conventions allowed its parent company to make huge and continuing investments in R&D. The generous funding, moreover, allowed scientists and engineers to buy and build expensive equipment — for instance, anechoic chambers to create the world's quietest rooms...

The most fortunate part of Bell Labs' situation, however, was that in being attached to a monopoly it could partake in long-term thinking... Without competition nipping at its heels, Bell Labs engineers had the luxury of working out difficult ideas over decades. The first conceptualization of a cellular phone network, for instance, came out of the Labs in the late 1940s; it wasn't until the late 1970s that technicians began testing one in Chicago to gauge its potential. The challenge of deploying these technologies was immense. (The regulatory hurdles were formidable, too....)
The article also credits the visionary management of Mervin Kelly — who fortunately also "had access to funding in a decade when most executives and universities didn't" to hire the brightest people. (By the early 1980s Bell Labs employed about 25,000 researchers, technicians and support staff, with an annual budget of $2 billion — roughly $7 billion in today's dollars.) "The Labs' involvement in World War II suggested to Kelly that an exciting postwar era of electronics was approaching, but that the technical problems would be so complex that they required a mix of expertise — not just physicists, but material scientists, chemists, electrical engineers, circuitry experts and the like."

At Bell Labs, Kelly would sometimes handpick teams and create such a mix, as was the case for the transistor invention in the late 1940s. He came to see innovation arising not from like-minded or similarly trained people conversing with each other, but from a friction of ideas and approaches. It meant hiring researchers who had different personalities and favored a range of experimental angles. It also meant personally designing a campus in Murray Hill where departments were spread apart, so that scientists and engineers would be forced to walk, mingle and engage in serendipitous conversations and debate ideas. Meanwhile, under Kelly, the Labs focused on hiring people who were deeply curious, not just smart. Kelly saw it as his professional duty to do far more than what was expected, with his laboratory and vast resources, to create new technologies...

The breakup of AT&T's monopoly, which led to a steady shrinking of Bell Labs' staff, budget and remit, shows us that no matter how forward-looking your employees and managers may be, they will not necessarily see the future coming. It likewise suggests that technological progress is too unpredictable for one organization, no matter how powerful or smart, to control.
Famously, Bell Labs managers didn't see value in the Arpanet, which eventually led to today's internet. And yet, for at least five decades, Bell Labs created a blueprint for the global development of communications and electronics.

In understanding why it did so, I tend to think its ultimate secret may be hiding in plain sight. The secret has to do with Bell Labs' structure — not only being connected to a fabulously profitable monopoly, but being connected to a company that could move theoretical and applied research into a huge manufacturing division that made telecom equipment (at Western Electric) and ultimately into a dynamic operating system (the AT&T network)... Scientists and engineers at the Labs understood their ideas would be implemented, if they passed muster, into the huge system its parent company was running.

Bell Labs racked up about 30,000 patents, according to the article, and celebrated its 100th anniversary last April. It is now part of Finland-based Nokia.

Read more of this story at Slashdot.


C++26 is done! — Trip report: March 2026 ISO C++ standards meeting (London Croydon, UK)

News flash: C++26 is done! 🎉

On Saturday, the ISO C++ committee completed technical work on C++26 in (partly) sunny London Croydon, UK. We resolved the remaining international comments on the C++26 draft, and are now producing the final document to be sent out for its international approval ballot (Draft International Standard, or DIS) and final editorial work, to be published in the near future by ISO.

This meeting was hosted by Phil Nash of Shaved Yaks, and the Standard C++ Foundation. Our hosts arranged for high-quality facilities for our six-day meeting from Monday through Saturday. We had about 210 attendees, about 130 in-person and 80 remote via Zoom, formally representing 24 nations. At each meeting we regularly have new guest attendees who have never attended before, and this time there were 24 new guest attendees, mostly in-person, in addition to new attendees who are official national body representatives. To all of them, once again welcome!

The committee currently has 23 active subgroups, nine of which met in six parallel tracks throughout the week. Some groups ran all week, and others ran for a few days or a part of a day, depending on their workloads. We had three technical evening sessions on C++ compiler/library implementations, memory safety, and quantities/units. You can find a brief summary of ISO procedures here.

C++26 is complete: The most compelling release since C++11

Per our published C++26 schedule, this was our final meeting to finish technical work on C++26. No features were added or removed; we just handled fit-and-finish issues, focusing primarily on resolving the 411 national body comments we received in the summer’s international comment ballot (Committee Draft, or CD).

If you’re wondering “what are the Big Reasons I should care about C++26?” then the best place to start is with the C++26 Fab Four Features…

(1) Reflection, reflection, reflection

Reflection is by far the biggest upgrade for C++ development that we’ve shipped since the invention of templates. For details, see my June 2025 trip report and my September 2025 CppCon keynote: “Reflection: C++’s decade-defining rocket engine.” From the talk abstract:

“In June 2025, C++ crossed a Rubicon: it handed us the keys to its own machinery. For the first time, C++ can describe itself—and generate more. The first compile-time reflection features in draft C++26 mark the most transformative turning point in our language’s history by giving us the most powerful new engine for expressing efficient abstractions that C++ has ever had, and we’ll need the next decade to discover what this rocket can do.”
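For a flavor of the feature, here is a tiny sketch in the draft C++26 syntax (per the adopted P2996 design): `^^T` reflects an entity into a `std::meta::info` value, and `std::meta` functions query it at compile time. Treat this as a sketch rather than tested code; the syntax is only now landing in compiler trunks, and header and function names may still shift before publication.

```cpp
#include <meta>  // draft C++26 reflection support (P2996); not yet shipped

struct Point { int x; int y; };

// Reflect Point's non-static data members and recover their names,
// entirely at compile time.
static_assert(std::meta::identifier_of(
    std::meta::nonstatic_data_members_of(
        ^^Point, std::meta::access_context::current())[0]) == "x");

static_assert(std::meta::identifier_of(
    std::meta::nonstatic_data_members_of(
        ^^Point, std::meta::access_context::current())[1]) == "y");
```

From member-name queries like this, libraries can generate serializers, bindings, and enum-to-string tables that previously required macros or external code generators.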

(2) Less UB for more memory safety: C++ code is more memory safe just by recompiling as C++26

C++26 has important memory safety improvements that you get just by recompiling your existing C++ code with no changes. The improvements come in two major ways.

No more undefined behavior (UB) for reading uninitialized local variables. This whole category of potential vulnerabilities disappears in C++26, just by recompiling your code as C++26. For more details, see my March 2025 trip report.

The hardened standard library provides initial cross-platform library security guarantees, including bounds safety for dozens of the most widely used bounded operations on common standard types, including vector, span, string, string_view, and more. For details, see my February 2025 trip report and run (don’t walk) to read the November 2025 ACM Queue article “Practical Security in Production: Hardening the C++ Standard Library at Massive Scale” to learn how this is already deployed across Apple platforms and Google services, hundreds of millions of lines of code, with on average 0.3% (a fraction of 1%) performance overhead. From the paper:

“The final tally after the rollout was remarkable. Across hundreds of millions of lines of C++ at Google, only five services opted out entirely because of reliability or performance concerns. Work is ongoing to eliminate the need for these few remaining exceptions, with the goal of reaching universal adoption.

Even more telling, the fine-grained API [to opt out] for unsafe access was used in just seven distinct places, all of which were surgical changes made by the security team to reclaim performance in code that was correct but difficult for the compiler to analyze. This widespread adoption stands as the strongest possible testament to the practicality of the hardening checks in real-world production environments.”

This is no just-on-paper design. At Google alone, it has already fixed over 1,000 bugs, is projected to prevent 1,000 to 2,000 bugs a year, and has reduced the segfault rate across the production fleet by 30%.

And, now, it is standardized for everyone in C++26. Thank you, Apple and Google and all the standard library implementers!

(3) Contracts for functional safety: pre, post, contract_assert

In C++26, we also have language contracts: preconditions and postconditions on function declarations and a language-supported assertion statement, all of which are infinitely better than C’s assert macro.
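As a taste of the syntax (per the adopted P2900 design; this is draft C++26 and compiles today only on experimental builds such as GCC trunk, so treat it as a sketch rather than tested code):

```cpp
// pre/post attach to the function declaration; contract_assert is a
// language-level assertion statement, not a macro.
int divide(int num, int den)
    pre(den != 0)                          // checked before the body runs
    post(r : den * r + num % den == num)   // 'r' names the return value
{
    return num / den;
}

void process(int size)
{
    contract_assert(size >= 0);  // participates in the same checking modes
    // ...
}
```

Unlike `assert`, these conditions are part of the declaration, so callers, tools, and implementations can see them, and the checking behavior is controlled by standardized evaluation semantics rather than an `NDEBUG` macro.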

Note that some smart and respected ISO C++ committee experts have sustained technical concerns about contracts. For my summary of the contracts feature and all the major repeated concerns (with my opinions about them, which could be wrong!), see my CppCon 2025 contracts talk. In February 2025 when we took the plenary poll to adopt contracts in the C++26 working draft (“merge it to trunk”), the vote was:

100 in favor, 14 opposed, and 12 abstaining

Since then, the concerns have all been deeply rediscussed for the last three meetings thanks to thoughtful and high-quality technical papers; all of those papers have continued to be fully heard, often for multiple days and at many telecons between meetings. At our previous meeting in November 2025, we did fix a couple of contracts specification bugs thanks to this feedback! Yesterday, when we took the plenary poll to finalize and ship the C++26 standard, the vote was non-unanimous because of the sustained concerns about contracts:

114 in favor, 12 opposed, and 3 abstaining

The unusually low number of abstentions shows that virtually all our experts now feel sure about their technical opinions, either for or against contracts. After extensive scrutiny, the committee’s opinion is clear: The ISO C++ committee still wants contracts, and so contracts have stayed in C++26.

(4) std::execution (aka “Sender/Receiver”)

std::execution is what I call “C++’s async model”: It provides a unified framework to express and control concurrency and parallelism. For details, see my July 2024 trip report. It also happens to have some important safety properties, because it makes it easier to write programs whose structured (rigorously lifetime-nested) concurrency and parallelism are data-race-free by construction. That’s a big deal.

I do want to add a warning, though: This feature is great (my company has already been using it in production), but it’s currently harder to adopt than most C++ features because it lacks great documentation and is missing some “fingers-and-toes” libraries. For now, do expect great results from std::execution, but also expect to spend some initial time learning it with the help of a friend who already knows it well, and to write a few helper adapter libraries to integrate std::execution with your existing async code.
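A minimal flavor of the model, using the standard names from P2300: a sender describes work without running it, adaptors compose more work onto it, and a consumer finally drives it. Standard library support is still rolling out, so today you would likely build something like this against the stdexec reference implementation (with its own header and namespace); treat this as a sketch of the standard shape, not code I have compiled against a shipping standard library.

```cpp
#include <execution>  // C++26 senders/receivers; not yet in most toolchains

namespace ex = std::execution;

int main() {
    // A sender is a description of work; nothing runs when it is built.
    auto work = ex::just(40)
              | ex::then([](int v) { return v + 2; });

    // sync_wait connects and starts the sender, then blocks this thread
    // until the pipeline completes, yielding its result.
    auto [result] = std::this_thread::sync_wait(std::move(work)).value();
    // result holds the value produced by the pipeline
}
```

The structured shape is the safety win: the work's lifetime is nested inside the `sync_wait` call, so there is no detached task whose references can dangle.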

C++26 adoption will be fast

There are two reasons I expect C++26 to be adopted in industry very quickly compared with C++17, C++20, and C++23.

First, user demand for this feature set is extraordinarily high. C++11 was our last “big and impactful” release that was chock full of features that the vast majority of C++ developers would use daily (auto, range-for, lambdas, smart pointers, move semantics, threads, mutexes, …). Since then, our followup triennial standards have also had some ‘big’ features like parallel STL, concepts, coroutines, and modules, but the reality is that those weren’t as massively impactful for all C++ developers as C++11’s features were, or as C++26’s marquee features of reflection and safety hardening will be now. So even if your company has been slow to enable the C++20 switch, I think you’ll find they’ll be much faster to enable C++26. There’s just so much more high-demand value that makes it an exciting and exceptionally useful release for everyone who uses C++.

Second, conforming compiler and standard library implementations are coming quickly. Throughout the development of C++26, at any given point both GCC and Clang had already implemented two-thirds of C++26 features. Today, GCC already has reflection and contracts merged in trunk, awaiting release.

Work on C++29, especially on more memory safety and profiles

At this meeting we also adopted the schedule for C++29, which will be another three-year release cycle. To no one’s surprise, a major focus of the discussion about C++29-timeframe material was about further increasing memory safety.

This week, the main language evolution subgroup (EWG) reviewed updates on several ongoing proposals for further type/memory-safety improvements in C++29. EWG decided to pursue proposals that further reduce undefined behavior, which will return to EWG for possible inclusion in C++29, and to continue developing the safety profile papers in the safety and security subgroup (SG23) before bringing them to EWG, also targeting C++29. SG23 specifically worked on Bjarne Stroustrup’s P3984 type safety profile using the proposed general profiles framework by Gabriel Dos Reis.

Besides those sessions, type and memory safety was extensively discussed in two additional large-attendance sessions: an evening session on Wednesday night attended by the majority of the committee, and in an EWG memory safety dedicated session all Friday afternoon attended by about 90 experts. In particular, I want to thank Oliver Hunt of Apple for presenting the practical experience report P4158R0, “Subsetting and restricting C++ for memory safety,” reporting how WebKit hardened over 4 million lines of code using a subset-of-superset approach (like Stroustrup’s Profiles) and showing how that has already made a profound difference in the security of C++ code in WebKit at scale. Here are a few highlights from the introduction slide (emphasis added):

  • Adopted ~4 MLoC of code (i.e., w/o comments, tests, external libs, etc.)
  • Closes multiple classes of vulnerabilities; current policies would have prevented the majority of historical exploits
  • Found (and prevented exploitability of) new and existing bugs

C++29 is already set to build even more on the safety improvements already in C++26, and I for one welcome our new era of safer-by-default-and-still-zero-overhead-efficient-C++ overlords. C++26 is the first step into a fundamentally new era: This isn’t our grandparents’ wild-west UB-filled anything-goes C++ anymore. But even as C++ moves to being more memory-safe by default, it’s staying true to C++’s enduring core of the zero-overhead principle… you don’t pay for what you don’t use, and even when some safety is so cheap that we can turn it on by default in the language, you will always have a way to opt out when you need the last ounce of performance in that hot path or inner loop.

We did other work, including other things related to functional safety: EWG reviewed additional plans for applying contract checks in the language and standard library. The numerics subgroup (SG6) and library incubation subgroup (SG18) progressed P3045R7, “Quantities and units library” by Mateusz Pusz, Dominik Berner, Johel Ernesto Guerrero Peña, Chip Hogg, Nicolas Holthaus, Roth Michaels and Vincent Reverdy to the main library evolution subgroup (LEWG), and we had an evening session on the topic for the whole committee on Thursday evening. I encourage reading section 7.1 “Safety concerns” in the paper, including how technology like that in this paper could have improved Black Sabbath’s 1983 tour (which was hilariously lampooned in, of course, This Is Spinal Tap).

What’s next

Our next two meetings will be in Brno, Czechia in June and in Búzios, Rio de Janeiro, Brazil in November. At those two meetings we will start work on adding features into the new C++29 working draft.

Wrapping up

Thank you again to the about 210 experts who attended on-site and on-line at this week’s meeting, and the many more who participate in standardization through their national bodies!

But we’re not slowing down… we’ll continue to have subgroup Zoom meetings, and then in less than three months from now we’ll be meeting again in Czechia and online to start adding features to C++29, with many subgroup telecons already scheduled between now and then. Thank you again to everyone reading this for your interest and support for C++ and its standardization.




What if a dialog wants to intercept its own message loop?

So far, we’ve been looking at how a dialog box owner can customize the dialog message loop. But what about the dialog itself? Can the dialog customize its own dialog message loop?

Sure. It just has to steal the messages from its owner.

The dialog box can subclass its owner and grab the WM_ENTERIDLE message. Now, maybe it should be careful only to grab WM_ENTERIDLE messages that were triggered by that dialog and not accidentally grab messages that were triggered by other dialogs.

HANDLE hTimer;

LRESULT CALLBACK EnterIdleSubclassProc(HWND hwnd, UINT message,
    WPARAM wParam, LPARAM lParam, UINT_PTR id,
    [[maybe_unused]] DWORD_PTR data)
{
    if (message == WM_ENTERIDLE &&
        wParam == MSGF_DIALOGBOX &&
        (HWND)lParam == (HWND)id) {
        // Forward the message to the dialog we are working on behalf of.
        return SendMessage((HWND)id, message, wParam, lParam);
    } else {
        return DefSubclassProc(hwnd, message, wParam, lParam);
    }
}

INT_PTR CALLBACK DialogProc(HWND hdlg, UINT message, WPARAM wParam, LPARAM lParam)
{
    LARGE_INTEGER twoSeconds;

    switch (message) {
    ⟦ ... ⟧

    case WM_INITDIALOG:
        // Periodic waitable timer: first fires after 2 seconds,
        // then every 2000ms thereafter.
        hTimer = CreateWaitableTimerW(nullptr, FALSE, nullptr);
        twoSeconds.QuadPart = -2 * wil::filetime_duration::one_second;
        SetWaitableTimer(hTimer, &twoSeconds, 2000, nullptr, nullptr, FALSE);
        // Use our own window handle as the subclass ID.
        SetWindowSubclass(GetParent(hdlg), EnterIdleSubclassProc,
                          (UINT_PTR)hdlg, 0);
        ⟦ other dialog box setup ⟧
        return TRUE;

    case WM_ENTERIDLE:
        OnEnterIdle(hdlg, (UINT)wParam, (HWND)lParam);
        return 0;

    ⟦ ... ⟧
    }

    return FALSE;
}

When the dialog box initializes, we create the periodic waitable timer (for demonstration purposes) and also subclass our owner window with the EnterIdleSubclassProc. We use the dialog window handle as the ID for two reasons. First, it lets us pass a parameter to the subclass procedure so it knows which dialog box it is working on behalf of. (We could also have passed it as the data parameter.) More importantly, it allows multiple dialogs to use the EnterIdleSubclassProc to subclass their owner, and the multiple subclasses won’t conflict with each other.

The subclass procedure checks whether it is a WM_ENTERIDLE, marked as coming from a dialog box message loop, and where the dialog box handle is the one we have. If so, then we forward the WM_ENTERIDLE back into the dialog for processing. That processing consists of using the OnEnterIdle function we created at the start of the series, which processes waitable timer events while waiting for a message to arrive.

Okay, but should we be careful to grab WM_ENTERIDLE messages only if they correspond to our dialog box? Because if the owner displays some other modal dialog box while our dialog is up (not really a great idea, but hey, weirder things have happened), then we still want to process our waitable timer events. But on the other hand, maybe that other dialog wants to customize the message loop in a different way. Probably best to steal messages only if they originated from our dialog box.

The post What if a dialog wants to intercept its own message loop? appeared first on The Old New Thing.


Kidney failure case reported in raw cheese outbreak; maker still denies link


Two more illnesses have been identified in an E. coli outbreak linked to unpasteurized cheese and milk, the Food and Drug Administration reported Thursday. The maker of the products, California-based Raw Farm, continues to deny the link and has refused to issue a recall.

According to the FDA, at least nine people have been sickened in three states, an increase of two cases since the outbreak was announced earlier this month. Three of the nine cases required hospitalization, and one person developed a life-threatening complication called hemolytic uremic syndrome, or HUS, which causes a type of kidney failure.

Outbreak investigators have interviewed eight of the nine people sickened. All eight reported consuming unpasteurized dairy. One person couldn't recall a brand, but the remaining seven all singled out products from Raw Farm. Five people ate Raw Farm's raw cheddar, and two drank Raw Farm's raw milk. Whole genome sequencing of the E. coli isolates from the patients shows high similarity, suggesting they came from a common source.

The FDA highlighted that the people sickened in this outbreak are young, with over half being less than 5 years old. Children under 5 are particularly vulnerable to severe complications, including HUS, from the type of E. coli in this outbreak, which is a Shiga toxin-producing E. coli, or STEC.

In an infection, STEC makes its way into the intestines and burrows into the mucous layer, where it starts secreting toxin. The toxin can bind to a receptor on certain cells (Gb3) and shut down protein production, causing the cell to die and triggering inflammation. In the progression to HUS, the toxin gets into the bloodstream and takes its cell-killing abilities body-wide. But the tiny blood vessels in the kidney—which have the highest prevalence of the Gb3 receptor—are most vulnerable. Flooded with toxin, the kidney's small blood vessels become damaged, red blood cells start bursting, platelets form clots, and the vessels start shutting down entirely, causing parts of the kidney to die and the body to run low on red blood cells and platelets.

Dangerous contamination

For patients, the symptoms of HUS can progress from abdominal pain and bloody diarrhea to signs of kidney failure, including less urine production, blood in the urine, and fluid overload. Signs of anemia (paleness, fatigue, fainting) and low platelets (easy bleeding and bruising) may also be present. With supportive treatment, the fatality rate is only about 5 percent, but up to 25 percent of cases will develop long-term kidney problems.

One of the most common causes of HUS is drinking STEC-contaminated, unpasteurized milk—which is milk that hasn't been briefly heated to kill off disease-causing microbes, like STEC. Cattle asymptomatically carry STEC and other pathogens in their digestive systems, and those germs can easily transfer to milk during the milking process. Cheeses made from unpasteurized milk can be aged to reduce—but not eliminate—pathogenic bacteria. An FDA survey that tested various 60-day-aged cheeses on the market found a pathogen contamination rate of less than 1 percent. The raw cheese linked to the current outbreak is labeled as having been aged a minimum of 60 days.

Raw Farm, a high-profile anti-pasteurization dairy producer, has been linked to over a dozen other outbreaks and many recalls in the last 20 years, including a Salmonella outbreak in 2024 that included at least 171 illnesses. But in repeated social media posts, Raw Farm owner Aaron McAfee (son of founder Mark McAfee) has rejected any responsibility for the illnesses and says they "100% disagree" with the FDA's findings.

In a social media post Thursday, McAfee touts that testing of their dairy came back negative for STEC. But food safety experts have cautioned that low-level contamination can be very difficult to detect in sample testing. For instance, detecting a single bacterium in a 25-gram food sample is equivalent to detecting 1 in 5 trillion parts or "a small needle in multiple haystacks," according to a food testing review published last year. The review also notes that negative sampling doesn't mean the rest of the batch is free of pathogens, as disease-causing germs are often distributed unevenly in batches of foods.
